
    Mobile assistive technologies for people with visual impairment: sensing and conveying information to support orientation, mobility and access to images

    Smartphones are accessible to persons with visual impairment or blindness (VIB): screen reader technologies, integrated with mobile operating systems, enable non-visual interaction with the device. Moreover, features like GPS receivers, inertial sensors and cameras enable the development of Mobile Assistive Technologies (MATs) to support people with VIB. A preliminary analysis, conducted with a user-centric approach, highlighted issues experienced by people with VIB in everyday activities in three main fields: orientation, mobility and access to images. Traditional approaches to these issues, based on assistive tools and technologies, have limitations: in the field of mobility, for example, existing navigation support solutions (e.g., the white cane) cannot be used to perceive some environmental features, like crosswalks or the current state of traffic lights; in the field of orientation, the tactile maps adopted to develop cognitive maps of the environment are limited in the amount of information that can be represented on a single surface and by their lack of interactivity, two issues also experienced in other fields where access to graphical information is of paramount importance, such as the didactics of STEM subjects. This work presents new MATs that address these limitations by introducing novel solutions in different fields of Computer Science. Original computer vision techniques, designed to detect the presence of pedestrian crossings and the state of traffic lights, are used to sense information from the environment and support the mobility of people with VIB. Novel sonification techniques are introduced to efficiently convey information with three different goals: first, to convey guidance information at urban crossings; second, to enhance the development of cognitive maps by augmenting tactile surfaces; third, to enable quick access to images. The experience reported in this dissertation shows that the proposed MATs are effective in supporting people with VIB and, in general, that mobile devices are a versatile platform for affordable and pervasive access to assistive technologies. Involving target users in the evaluation of MATs emerged as a major challenge in this work. However, it is shown how this challenge can be addressed by adopting the large-scale evaluation techniques typical of HCI research.
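    The dissertation's pedestrian-crossing detector itself is not reproduced in this abstract. Purely as a minimal sketch of the general idea, a zebra crossing can be flagged by looking for the repeating bright stripes of road paint; the thresholds and heuristics below are assumptions, not the dissertation's algorithm:

```python
import cv2
import numpy as np

def looks_like_zebra_crossing(frame_bgr, min_stripes=3):
    """Heuristic check for the alternating bright stripes of a zebra crossing.

    Illustrative sketch only, not the dissertation's technique: a real
    detector must handle perspective, illumination changes and clutter.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Isolate bright (white-painted) regions.
    _, bright = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
    # Look at the lower half of the image, where the road surface usually is.
    h = bright.shape[0]
    roi = bright[h // 2:, :]
    # Count white/dark alternations along each row; stripes produce many.
    transitions = np.abs(np.diff(roi.astype(np.int16), axis=1)) > 0
    stripe_rows = (transitions.sum(axis=1) >= 2 * min_stripes).sum()
    # Require the pattern to persist over many rows to reject noise.
    return stripe_rows > roi.shape[0] * 0.3
```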

    PlugSonic: a web- and mobile-based platform for binaural audio and sonic narratives

    PlugSonic is a suite of web- and mobile-based applications for the curation and experience of binaural interactive soundscapes and sonic narratives. It was developed as part of the PLUGGY EU project (Pluggable Social Platform for Heritage Awareness and Participation) and consists of two main applications: PlugSonic Sample, to edit and apply audio effects, and PlugSonic Soundscape, to create and experience binaural soundscapes. The audio processing within PlugSonic is based on the Web Audio API and the 3D Tune-In Toolkit, while the exploration of soundscapes in a physical space is obtained using Apple's ARKit. In this paper we present the design choices, the user involvement processes and the implementation details. The main goal of PlugSonic is technology democratisation: PlugSonic users, whether institutions or citizens, are all given the instruments needed to create, process and experience 3D soundscapes and sonic narratives, without the need for specific devices, external tools (software and/or hardware), specialised knowledge or custom development. The evaluation, which was conducted with inexperienced users on three tasks (creation, curation and experience), demonstrates that PlugSonic is a simple, effective, yet powerful tool.
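    PlugSonic's binaural rendering is delegated to the HRTF-based 3D Tune-In Toolkit; as an illustration of the underlying idea only, the following sketch spatialises a mono signal with crude interaural time and level differences (all constants are assumptions, not PlugSonic internals):

```python
import numpy as np

def crude_binaural_pan(mono, sample_rate, azimuth_deg):
    """Spatialise a mono signal using simple interaural time/level differences.

    Illustrative only: PlugSonic itself uses HRTF-based rendering via the
    3D Tune-In Toolkit, which models the head far more accurately.
    """
    az = np.radians(azimuth_deg)            # 0 = front, +90 = hard right
    max_itd = 0.0007                        # ~0.7 ms maximum interaural delay
    delay = int(abs(np.sin(az)) * max_itd * sample_rate)
    # Level difference: attenuate the ear facing away from the source.
    near_gain, far_gain = 1.0, 0.4 + 0.6 * np.cos(az) ** 2
    delayed = np.concatenate([np.zeros(delay), mono])
    padded = np.concatenate([mono, np.zeros(delay)])
    if azimuth_deg >= 0:                    # source on the right
        left, right = far_gain * delayed, near_gain * padded
    else:                                   # source on the left
        left, right = near_gain * padded, far_gain * delayed
    return np.stack([left, right], axis=1)  # shape: (samples, 2)
```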

    Design and evaluation of a web- and mobile-based binaural audio platform for cultural heritage

    PlugSonic is a suite of web- and mobile-based applications for the curation and experience of 3D interactive soundscapes and sonic narratives in the cultural heritage context. It was developed as part of the PLUGGY EU project (Pluggable Social Platform for Heritage Awareness and Participation) and consists of two main applications: PlugSonic Sample, to edit and apply audio effects, and PlugSonic Soundscape, to create and experience 3D soundscapes for headphones playback. The audio processing within PlugSonic is based on the Web Audio API and the 3D Tune-In Toolkit, while the mobile exploration of soundscapes in a physical space is obtained using Apple’s ARKit. The main goal of PlugSonic is technology democratisation: PlugSonic users, whether cultural institutions or citizens, are all given the instruments needed to create, process and experience 3D soundscapes and sonic narratives, without the need for specific devices, external tools (software and/or hardware), specialised knowledge or custom development. The aims of this paper are to present the design and development choices, the user involvement processes, as well as a final evaluation conducted with inexperienced users on three tasks (creation, curation and experience), demonstrating that PlugSonic is a simple, effective, yet powerful tool.
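    Conceptually, a soundscape in this kind of platform is a set of positioned audio sources plus a listener. As a purely hypothetical illustration (this is not PlugSonic's actual file format or schema), a minimal soundscape description might look like:

```python
# Hypothetical soundscape description, NOT PlugSonic's real schema:
# a named scene with positioned mono sources that a binaural engine
# (e.g. an HRTF renderer) can spatialise around a moving listener.
soundscape = {
    "name": "Roman forum, midday",
    "room_size_m": [10.0, 10.0, 3.0],
    "sources": [
        {"file": "fountain.wav", "position_m": [2.0, 3.5, 1.2], "loop": True, "gain": 0.8},
        {"file": "narration.wav", "position_m": [0.0, 1.0, 1.6], "loop": False, "gain": 1.0},
    ],
    "listener": {"position_m": [0.0, 0.0, 1.6], "yaw_deg": 0.0},
}
```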

    Towards Large Scale Evaluation of Novel Sonification Techniques for Non Visual Shape Exploration

    There are several situations in which a person with visual impairment or blindness needs to extract information from an image. Examples include everyday activities, like reading a map, as well as educational activities, like exercises to develop visuospatial skills. In this contribution we propose a set of six sonification techniques for recognizing simple shapes on touchscreen devices. The effectiveness of these sonification techniques is evaluated through Invisible Puzzle, a mobile application that makes it possible to conduct non-supervised evaluation sessions. Invisible Puzzle adopts a gamification approach and is a preliminary step in the development of a complete game that will make it possible to conduct a large-scale evaluation with hundreds or thousands of blind users. With Invisible Puzzle we conducted 131 tests with sighted subjects and 18 tests with subjects with blindness. All subjects involved in the process successfully completed the evaluation session, with high engagement, showing the effectiveness of the evaluation procedure. Results give interesting insights into the differences among the sonification techniques and, most importantly, show that, after a short training, subjects are able to identify many different shapes.
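    The six evaluated techniques are not specified in this abstract. As a generic illustration of touchscreen shape sonification (an assumed mapping, not one of the paper's techniques), one can emit a tone only while the finger is inside the shape, with pitch tracking the vertical position and stereo pan tracking the horizontal one:

```python
import numpy as np

def sonify_touch(x, y, inside_shape, sample_rate=44100, dur=0.05):
    """Return a short stereo buffer for one touch sample.

    Generic illustration of touchscreen shape sonification, not one of
    the six techniques evaluated in the paper. x, y are normalised to
    [0, 1] screen coordinates; inside_shape is a boolean hit test.
    """
    t = np.arange(int(sample_rate * dur)) / sample_rate
    if not inside_shape:
        return np.zeros((t.size, 2))          # silence outside the shape
    freq = 220.0 * 2.0 ** (2.0 * (1.0 - y))   # higher pitch near the top
    tone = 0.3 * np.sin(2 * np.pi * freq * t)
    pan = x                                   # pan left-to-right with x
    return np.stack([(1.0 - pan) * tone, pan * tone], axis=1)
```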

    Accessible Mathematics on Touchscreen Devices: New Opportunities for People with Visual Impairments

    In recent years, educational applications for touchscreen devices (e.g., tablets) have become widespread all over the world. While these devices are accessible to people with visual impairments, educational applications that support the learning of STEM subjects are often not, due to inaccessible graphics. This contribution addresses the problem of conveying graphics to visually impaired users. Two approaches are taken into account: audio icons and image sonification. In order to evaluate the applicability of these approaches, we report our experience in the development of two didactic applications for touchscreen devices, specifically designed to support people with visual impairments or blindness while studying STEM subjects: Math Melodies and Audio Functions. The former is a commercial application that supports children in primary school in an inclusive class and adopts an interaction paradigm based on audio icons. The latter is a prototype application aimed at enabling visually impaired students to explore function diagrams and adopts an image sonification approach.
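    As a rough illustration of the image sonification idea behind a tool like Audio Functions (a sketch under assumed parameters, not the app's actual mapping), a function diagram can be rendered as a left-to-right sweep in which pitch follows the function value:

```python
import numpy as np

def sonify_function(f, x_min, x_max, dur=2.0, sample_rate=44100):
    """Render y = f(x) as a pitch contour swept left to right.

    Illustrative sketch only: Audio Functions' real mapping and
    interaction (e.g. finger-driven exploration) are richer than this.
    """
    n = int(sample_rate * dur)
    xs = np.linspace(x_min, x_max, n)
    ys = np.array([f(x) for x in xs])
    # Normalise function values into a comfortable pitch range.
    lo, hi = ys.min(), ys.max()
    norm = (ys - lo) / (hi - lo) if hi > lo else np.zeros(n)
    freqs = 200.0 + 800.0 * norm              # 200 Hz .. 1000 Hz
    # Integrate instantaneous frequency to get a continuous phase.
    phase = 2 * np.pi * np.cumsum(freqs) / sample_rate
    return 0.3 * np.sin(phase)

# Example: listen to the shape of a parabola.
samples = sonify_function(lambda x: x ** 2, -2.0, 2.0)
```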

    Insights on Assistive Orientation and Mobility of People with Visual Impairment Based on Large-Scale Longitudinal Data

    Assistive applications for orientation and mobility promote independence for people with visual impairment (PVI). While typical design and evaluation of such applications involves small-sample iterative studies, we analyze large-scale longitudinal data from a geographically diverse population. Our publicly released dataset from iMove, a mobile app supporting orientation of PVI, contains millions of interactions by thousands of users over a year. Our analysis (i) examines common functionalities, settings, assistive features, and movement modalities in the iMove dataset and (ii) discovers user communities based on interaction patterns. We find that the most popular interaction mode is passive, where users receive more notifications, often verbose, while in motion and perform fewer actions. The use of built-in assistive features such as enlarged text indicates a high presence of users with residual sight. Users fall into three distinct groups: (C1) users interested in surrounding points of interest, (C2) users interacting in short bursts to inquire about their current location, and (C3) users with long active sessions while in motion. iMove was designed with C3 in mind, and one strength of our contribution is providing meaningful semantics for the unanticipated groups, C1 and C2. Our analysis reveals insights that can be generalized to other assistive orientation and mobility applications.
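    The paper discovers user communities from interaction patterns; as a hedged sketch of how such groups might be found (the feature names below are hypothetical, not the paper's actual feature set or method), one can cluster per-user aggregates:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-user aggregates (not the paper's actual features):
# [sessions per week, median session length (s), actions per session,
#  fraction of interactions while in motion]
users = np.array([
    [14.0,  45.0, 1.2, 0.1],   # short bursts, mostly stationary
    [ 3.0, 900.0, 8.5, 0.8],   # long active sessions while moving
    [ 7.0, 120.0, 3.0, 0.3],
    # ... one row per user
])

features = StandardScaler().fit_transform(users)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(labels)   # cluster assignment per user, e.g. candidates for C1..C3
```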

    Supporting pedestrians with visual impairment during road crossing: a mobile application for traffic lights detection

    Many traffic lights are still not equipped with acoustic signals. It is possible to recognize the traffic light color from a mobile device, but this requires a technique that is stable under different illumination conditions. This contribution presents TL-recognizer, an application that recognizes traffic lights from a mobile device camera. The proposed solution includes a robust setup for image capture as well as an image processing technique. Experimental results provide evidence that the proposed solution is practical.
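    TL-recognizer's actual pipeline is not reproduced here. As a minimal sketch of the underlying idea, color segmentation in HSV space can flag candidate red or green lamps; the thresholds below are assumptions and would need tuning for the very illumination conditions the paper addresses:

```python
import cv2
import numpy as np

def classify_traffic_light(frame_bgr):
    """Very rough red/green lamp detection via HSV thresholding.

    Sketch only; TL-recognizer's published pipeline also relies on a
    controlled capture setup and is far more robust to illumination.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Bright, saturated red (hue wraps around 0 in OpenCV's 0-179 range).
    red = cv2.inRange(hsv, (0, 120, 150), (10, 255, 255)) \
        | cv2.inRange(hsv, (170, 120, 150), (179, 255, 255))
    # Bright, saturated green.
    green = cv2.inRange(hsv, (45, 120, 150), (90, 255, 255))
    red_px, green_px = int(np.count_nonzero(red)), int(np.count_nonzero(green))
    if max(red_px, green_px) < 50:           # too few pixels: no decision
        return "unknown"
    return "red" if red_px > green_px else "green"
```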

    JustPoint: Identifying colors with a natural user interface

    People with severe visual impairments usually have no way of identifying the colors of objects in their environment. While existing smartphone apps can recognize colors and speak them aloud, they require the user to center the object of interest in the camera's field of view, which is challenging for many users. To address this problem, we developed a smartphone app that reads aloud the color of the object pointed to by the user's fingertip, without confusion from background colors. We evaluated the app with nine people who are blind, demonstrating the app's effectiveness and suggesting directions for future improvement.
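    JustPoint's fingertip detector is not described in this abstract. Assuming some detector yields a fingertip pixel location (a hypothetical detect_fingertip step), naming the color sampled just above that point can be as simple as a nearest-neighbour lookup against a small palette; none of this is JustPoint's actual method:

```python
import numpy as np

# Small illustrative palette; a real app would use a finer-grained one
# and a perceptual color space (e.g. CIELAB) instead of raw RGB.
PALETTE = {
    "black": (0, 0, 0), "white": (255, 255, 255), "red": (220, 30, 30),
    "green": (30, 160, 60), "blue": (30, 60, 220), "yellow": (230, 210, 40),
}

def name_color_at_fingertip(frame_rgb, fingertip_xy):
    """Name the color of the region just above the detected fingertip.

    fingertip_xy is assumed to come from a separate detector (hypothetical
    detect_fingertip); this sketch is not JustPoint's actual method.
    """
    x, y = fingertip_xy
    y = max(0, y - 10)                      # sample slightly above the nail
    patch = frame_rgb[max(0, y - 2):y + 3, max(0, x - 2):x + 3]
    mean_rgb = patch.reshape(-1, 3).mean(axis=0)
    names, refs = zip(*PALETTE.items())
    dists = np.linalg.norm(np.array(refs) - mean_rgb, axis=1)
    return names[int(np.argmin(dists))]
```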